Current Issue: October-December | Volume: 2024 | Issue: 4 | Articles: 5
Although the brain-computer interface (BCI) is considered a revolutionary advance in human-computer interaction and has achieved significant progress, a considerable gap remains between current technological capabilities and practical applications. To promote the translation of BCI into practice, some studies have proposed gold standards for the online evaluation of BCI classification algorithms. However, few studies have proposed a more comprehensive evaluation method for the entire online BCI system, and such evaluation has not yet received sufficient attention from the BCI research and development community. This article therefore elaborates the qualitative leap from analyzing and modeling offline BCI data to constructing online BCI systems and optimizing their performance, emphasizes a user-centred perspective, and then details and reviews comprehensive evaluation methods for translating BCI into practical applications: the evaluation of the usability of online BCI systems (including the effectiveness and efficiency of systems), of user satisfaction (including BCI-related aspects), and of usage (including the match between the system and the user). Finally, the challenges faced in evaluating the usability and user satisfaction of online BCI systems, the efficacy of online BCI systems, and the integration of BCI with artificial intelligence (AI) and/or virtual reality (VR) and other technologies to enhance system intelligence and user experience are discussed. It is expected that the evaluation methods for online BCI systems elaborated in this review will promote the translation of BCI into practical applications.
Rapid serial visual presentation (RSVP) is currently a suitable gaze-independent paradigm for controlling visual brain–computer interfaces (BCIs) based on event-related potentials (ERPs), especially for users with limited eye movement control. However, unlike gaze-dependent paradigms, gaze-independent ones have received less attention concerning the specific choice of visual stimuli that are used. In gaze-dependent BCIs, images of faces—particularly those tinted red—have been shown to be effective stimuli. This study aims to evaluate whether the colour of faces used as visual stimuli influences ERP-BCI performance under RSVP. Fifteen participants tested four conditions that varied only in the visual stimulus used: grey letters (GL), red famous faces with letters (RFF), green famous faces with letters (GFF), and blue famous faces with letters (BFF). The results indicated significant accuracy differences only between the GL and GFF conditions, unlike prior gaze-dependent studies. Additionally, GL achieved higher comfort ratings compared with the face-related conditions. This study highlights that the choice of stimulus type impacts both performance and user comfort, suggesting implications for future ERP-BCI designs for users requiring gaze-independent systems.
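Alongside classification accuracy, ERP-BCI studies commonly report the information transfer rate (ITR) computed with the classic Wolpaw formula, since accuracy alone does not capture selection speed. A minimal sketch of that standard metric follows; the numeric inputs below are illustrative only and are not taken from the study.

```python
import math

def wolpaw_itr(accuracy: float, n_classes: int, selections_per_min: float) -> float:
    """Information transfer rate in bits/min via the classic Wolpaw formula,
    a standard companion to raw accuracy when reporting BCI performance."""
    if accuracy >= 1.0:
        bits = math.log2(n_classes)
    elif accuracy <= 1.0 / n_classes:
        bits = 0.0  # at or below chance level, no information is transferred
    else:
        p = accuracy
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * selections_per_min

# Illustrative example: a 30-class selection task at 90% accuracy,
# 4 selections per minute.
print(round(wolpaw_itr(0.90, 30, 4.0), 2))  # ≈ 15.81 bits/min
```

The formula assumes equiprobable classes and uniformly distributed errors, which is why more refined metrics are sometimes preferred when those assumptions do not hold.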
Recent studies have highlighted the possibility of using surface electromyographic (EMG) signals to develop human–computer interfaces that can also recognize complex motor tasks involving the hand, such as the handwriting of digits. However, the automatic recognition of words from EMG information has not yet been studied. The aim of this study is to investigate the feasibility of using combined forearm and wrist EMG probes for solving the handwriting recognition problem of 30 words with consolidated machine-learning techniques, aggregating state-of-the-art features extracted in the time and frequency domains. Six healthy subjects, three females and three males aged between 25 and 40 years, were recruited for the study. Two pattern recognition tests were conducted to assess the possibility of classifying fine hand movements through EMG signals. The first test assessed the feasibility of using consolidated myoelectric control technology with shallow machine-learning methods in the field of handwriting detection. The second test assessed whether specific feature extraction schemes can guarantee high performance with limited complexity of the processing pipeline. Among support vector machine, linear discriminant analysis, and K-nearest neighbours (KNN), the last showed the best classification performance in the 30-word classification problem, with a mean accuracy of 95% when using all the features and 85% when using a specific feature set known as TDAR. The obtained results confirm the validity of using combined wrist and forearm EMG data for intelligent handwriting recognition through pattern recognition approaches in real scenarios.
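The pipeline described above (time-domain feature extraction followed by a shallow classifier such as KNN) can be sketched in a few lines. This is a toy illustration on synthetic two-channel data, not the study's implementation: the window length, channel count, and classes are assumptions, and only the time-domain part of a TDAR-style feature set is shown.

```python
import numpy as np

def td_features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features per channel: mean absolute value,
    waveform length, and zero-crossing count (the TD part of a TDAR set)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

def knn_predict(train_X, train_y, x, k=3):
    """Minimal K-nearest-neighbours majority vote in feature space."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy demo: synthetic 2-channel "EMG" windows for two hypothetical word classes,
# distinguished only by signal amplitude.
rng = np.random.default_rng(0)
def make_window(scale):
    return rng.normal(0.0, scale, size=(200, 2))

train_X = np.array([td_features(make_window(s)) for s in [1.0] * 20 + [3.0] * 20])
train_y = np.array([0] * 20 + [1] * 20)
test_x = td_features(make_window(3.0))
print(knn_predict(train_X, train_y, test_x))
</antml>```

In a realistic setting each window would come from a sliding segmentation of multi-channel recordings, and the autoregressive coefficients completing the TDAR set would be appended to the feature vector.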
Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities by allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCI as a successful assistive technology, the widespread use of BCI outside the lab is still beyond reach. This can be attributed to a number of challenges that need to be addressed for BCI to be of practical use, including limited data availability, the limited temporal and spatial resolutions of brain signals recorded non-invasively, and intersubject variability. In addition, BCI development has long been confined to specific simple brain patterns, while developing BCI applications relying on complex brain patterns has proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models can be used to generate new data with properties resembling those of the available data. Given the enhancements observed in other domains facing similar challenges, GAI has recently been employed in a multitude of BCI development applications to generate synthetic brain activity, thereby augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided, demonstrating the enhancements achieved using GAI techniques in augmenting limited EEG data, enhancing the spatiotemporal resolution of recorded EEG data, enhancing the cross-subject performance of BCI systems, and implementing end-to-end BCI applications. GAI could represent the means by which BCI is transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities and helping to establish BCI as an emerging human-computer interaction technology for general use.
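The augmentation idea surveyed above can be illustrated with a deliberately simple generative model: fit a distribution to a small set of real epochs, sample synthetic epochs from it, and enlarge the training set. The class-conditional Gaussian below is a stand-in for the GAN/VAE/diffusion approaches the review covers, and all dimensions are invented for the sketch.

```python
import numpy as np

def fit_gaussian(epochs: np.ndarray):
    """Fit a multivariate Gaussian to flattened EEG epochs of shape
    (n_epochs, n_features); a small diagonal term keeps the covariance
    well-conditioned when epochs are few."""
    mu = epochs.mean(axis=0)
    cov = np.cov(epochs, rowvar=False) + 1e-6 * np.eye(epochs.shape[1])
    return mu, cov

def sample_synthetic(mu, cov, n, rng):
    """Draw n synthetic epochs from the fitted model to augment the data."""
    return rng.multivariate_normal(mu, cov, size=n)

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(30, 8))    # 30 real epochs, 8 features each
mu, cov = fit_gaussian(real)
synth = sample_synthetic(mu, cov, 100, rng)  # 100 synthetic epochs
augmented = np.vstack([real, synth])
print(augmented.shape)  # (130, 8)
```

A deep generative model replaces the Gaussian when the epoch distribution is too complex for second-order statistics, which is precisely the regime the reviewed GAI techniques target.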
As social robots may be used by a single user or by multiple users, different social scenarios are becoming more important for defining human-robot relationships. Therefore, this study explored human-robot relationships between robots and users in different interaction modes to improve the user interaction experience. Specifically, education and companionship were selected as the most common areas in the use of social robots. The interaction modes used included single-user interaction and multiuser interaction, and three human-robot relationships were adopted. The robot competence scale, human-robot trust scale, and acceptance of robot scale were used to evaluate subjects' views on robots. The results demonstrate that in both scenarios, people were more inclined to maintain a more familiar and closer relationship with the social robot when the robot interacted with a single user. When multiple persons interact in an education scenario, setting the robot to an acquaintance relationship is recommended to improve its perceived competence and people's trust in the robot. Similarly, in multi-person interaction in a companion scenario, an acquaintance relationship would be more accepted and trusted. Based on these results, robot sensors can be added to further optimize human-robot interaction sensing systems: by identifying the number of users in the interaction environment, robots can automatically employ the best human-robot relationship for interaction. Optimizing human-robot interaction sensing systems can also improve the robot performance perceived in the interaction, to meet different users' needs and achieve more natural human-robot interaction experiences.